Machine Learning assisted Congestion Control


  • We propose using a machine learning model to determine a data sender’s sending rate (in bytes/sec) in a network. The approach leverages ML’s ability to learn patterns from historical data: the model makes decisions based on rules derived from data features. In the context of congestion control, features such as round-trip time, packet loss rate, previous window sizes, and throughput can be used to predict an optimal sending rate.

    The model can also incorporate features of UDP-based applications, such as time-sensitive delivery, throughput, and delay, as well as features from application-level UDP congestion control, such as the QUIC protocol. These predictions help balance throughput against network stability.

    Problem this research addresses:

    Earlier congestion-control algorithms typically rely on a single signal, such as packet loss rate, delay, throughput, explicit congestion notification, or time-sensitive delivery combined with one of these parameters. Machine learning, by contrast, can consider multiple features simultaneously when making decisions.
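As a toy illustration of the idea above, the sketch below fits a model that maps the four named features (round-trip time, packet loss rate, previous window size, throughput) to a sending rate in bytes/sec. Everything here is an assumption for illustration: the training data is synthetic, the "optimal" rates are invented, and a simple least-squares linear model stands in for whatever ML algorithm the project ultimately uses.

```python
import numpy as np

# Synthetic training data, one row per control interval.
# Features: [rtt_ms, loss_rate, prev_cwnd_bytes, throughput_Bps]
rng = np.random.default_rng(0)
n = 200
rtt = rng.uniform(10, 200, n)      # round-trip time (ms)
loss = rng.uniform(0.0, 0.05, n)   # packet loss rate
cwnd = rng.uniform(1e4, 1e6, n)    # previous window size (bytes)
tput = rng.uniform(1e5, 1e7, n)    # observed throughput (bytes/sec)

# Invented "optimal" sending rate: grows with throughput and window,
# shrinks with loss and RTT. Purely illustrative, not measured data.
y = 0.9 * tput - 2e7 * loss - 500 * rtt + 0.1 * cwnd

# Fit a linear model by ordinary least squares (a stand-in for any
# ML regressor that consumes these features).
X = np.column_stack([rtt, loss, cwnd, tput, np.ones(n)])  # bias column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_rate(rtt_ms, loss_rate, prev_cwnd, throughput):
    """Predict a sending rate (bytes/sec) from the four features."""
    feats = np.array([rtt_ms, loss_rate, prev_cwnd, throughput, 1.0])
    return float(feats @ coef)

rate = predict_rate(50.0, 0.01, 2e5, 1e6)
```

The point of the sketch is the interface, not the model: a single predictor consumes several congestion signals at once, whereas the classical algorithms described above react to one signal at a time.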

  • I am collaborating with Prof. Neelima Gupta at Delhi University on this project.